File MRV[0,BGB], dated 1973-11-17.
CONTENTS
GOALS.
DATA.
1. TV images.
2. Features 2D.
3. Blobs 2D.
4. Features 3D.
5. Bodies 3D.
6. Elevation Map.
7. Cart Course Map.
PROGRAMMING.
1. Cart Running.
2. Image Analysis.
3. Locus Solving.
4. World Modeling.
5. Image Synthesis.
HARDWARE.
THE GOAL.
The goal of the Cart Project is to get a computer-controlled
cart to see by means of a TV camera, so that it can drive around
outside the laboratory.
THE GROUND RULES.
First, the robot must operate in the Real World. Reality, of
course, is subjective, and good robot work can be done in a simulated
or synthetic world; however, it is part of the goal of the
Cart Project to deal with the world of roadways, parking lots, grassy
hills, eucalyptus trees, sun, sky, dirt and horse manure that is
found outside our laboratory.
Second, the robot is allowed to have a map. In fact the map is
not merely an incidental thing that is or is not in the glove
compartment, but is essential to robot vision. The map or "world
model" is the robot's internal concept of the world it sees.
Manual initialization of the world model is allowed. Although
a sophisticated robot should be able to acquire a world model or map
automatically as it goes along, the mere representation of the map in
a computer is of such difficulty that I have had to settle for
manually constructing a world model. Programmers should always be
warned by the old proverb: you can't automate a process you don't
know nothun' about.
A line following automaton is not a solution. A line follower
can readily be made with a couple of photocells; it is harder to do
with a computer and a TV camera, but that has been done, and it did
not readily lend itself to generality.
Dead Reckoning should be minimal. Dead reckoning is a
non-visual means of knowing where the camera is. To the extent
that a robot dead reckons, to that extent is its visual organ
irrelevant to navigation. In the limit, a robot with a precise
locomotion system and an accurate map could drive around blindly.
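The arithmetic of dead reckoning is simple enough, which is part of why it is so tempting. A hypothetical sketch (the differential-drive pose and all names here are my illustration, not anything in the cart software): fold each blind move into an (x, y, heading) estimate.

```python
import math

def dead_reckon(x, y, heading, distance, turn):
    # Fold one blind move into the pose estimate: turn first, then roll.
    # x, y and distance in feet; heading and turn in radians.
    heading += turn
    x += distance * math.cos(heading)
    y += distance * math.sin(heading)
    return x, y, heading

# Two blind moves: 10 feet straight ahead, then a 90-degree turn and 10 more.
pose = (0.0, 0.0, 0.0)
pose = dead_reckon(*pose, 10.0, 0.0)
pose = dead_reckon(*pose, 10.0, math.pi / 2)
```

Every error in distance or turn accumulates without bound, which is exactly why a purely dead-reckoning cart needs no eyes and teaches us nothing about vision.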
Non-visual aids to navigation aren't kosher. The cart is
more a statement of a visual perception problem than it is a
computer-controlled cart. At present the cart has no inertial
guidance, feelers, speedometer, odometer or compass, and I am
reluctant to add them because they are non-visual aids to navigation.
However, I would like to have non-visual means of measuring the
position of the wheels, and the pan, tilt, target voltage and f-stop
of the camera; which indicates that I do not strictly follow the
no-non-visual-aids ground rule.
The world can be assumed to be essentially static. Namely, the
only things that move are the cart and the sun. If the robot can't
deal with a static world, then it won't have a chance in a dynamic
one; alternatively, a dynamic world can be successfully modeled as a
large number of static worlds which differ slightly.
SYSTEMS
The Cart's system problem is that the code and data for cart
vision and control are larger than core. I have tried multi-jobbing,
shared segments, overlays, ptys, mail, user interrupts, and peeking.
I conclude that building sophisticated subsystems does not
lead directly to clear and powerful vision software. What I am now
doing is writing a set of programs which each have three steps:
Input, Compute, and Output.
Accordingly, the Cart Running Sequence is merely a serial
loop of program runs, with no parallel processing; all inter-job
communication is done via the disk file system.
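As a sketch of what such an Input, Compute, Output step looks like, here is a hypothetical two-stage pipeline in which the stages communicate only through disk files; the file names and stage functions are invented for illustration and stand in for real cart programs.

```python
import json, os, tempfile

def run_stage(compute, infile, outfile):
    # One Input-Compute-Output program: read a file, compute, write a file.
    with open(infile) as f:
        data = json.load(f)
    with open(outfile, "w") as f:
        json.dump(compute(data), f)

workdir = tempfile.mkdtemp()
picture, features, locus = (os.path.join(workdir, n)
                            for n in ("picture.json", "features.json", "locus.json"))
with open(picture, "w") as f:
    json.dump({"pixels": [7, 7, 7]}, f)   # stand-in for a TV frame

# A serial loop of program runs; the only channel between them is the disk.
run_stage(lambda d: {"edges": len(d["pixels"])}, picture, features)
run_stage(lambda d: {"camera_locus": 2 * d["edges"]}, features, locus)
```

Each stage can be debugged, rerun, or replaced by hand, since its entire input and output sit on the disk between runs.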
PROCESSING SEQUENCES.
For our first solution, the cart will operate in a LOOK,
THINK, MOVE loop.
First it LOOKS: it takes a TV picture. Then it
THINKS: it predicts what is in the picture, verifies that what it
anticipated is there, and if it finds what it wants it measures it
and deduces where the picture was taken from and how far off that is
from where the cart wants to be. Then it MOVES: the steering
wheels are turned a certain amount if necessary, and the drive
motors are activated to move the cart blindly along its way from one
to twenty feet.
A. Model Making Sequence.
Ad Hoc Programming.
Manual Commands.
Image Analysis - computer assisted.
B. Cart Running Sequence.
Initialization - Tell the cart where it is.
INPUT - Take a television picture.
Prediction - Elements of the image are anticipated.
Analysis - Image elements are sought in the new image.
Verification - Anticipated elements are confirmed present.
Measurement - Compute the locus of the camera.
Course Calculation - Compute how far off the cart is from where it wants to be.
OUTPUT - Move the cart.
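The running sequence above can be sketched as a serial loop. This is a hypothetical one-dimensional simplification: the look and think functions and the waypoint course are stand-ins for the real programs, not actual cart software.

```python
def run_cart(start, waypoints, look, think, max_move=20.0):
    # Serial LOOK / THINK / MOVE loop, no parallelism.
    pose = start
    for target in waypoints:
        image = look()                    # LOOK: take a TV picture
        pose = think(image, pose)         # THINK: predict, verify, measure locus
        step = min(max_move, abs(target - pose))   # MOVE: blind, at most 20 feet
        pose += step if target >= pose else -step
    return pose

# Stub senses: the camera returns nothing and THINK trusts the old estimate,
# so this run degenerates to pure dead reckoning along the course.
final = run_cart(0.0, [15.0, 35.0], look=lambda: None, think=lambda img, p: p)
```

The point of the structure is that each pass around the loop is one complete Input, Compute, Output cycle; nothing overlaps the blind move.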
Photometry
Sure, it's important to find the edges and T-joints in an
image; however, knowing why a particular point in a particular image
is as bright as it is, is equally important, can solve the same
sorts of problems, and hasn't received enough attention.
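One way to know why a point is as bright as it is: predict its brightness from surface reflectance and sun direction. A minimal Lambertian sketch, assuming a matte surface of known albedo; the model and every name here are my illustration, not anything in the cart system.

```python
import math

def predicted_brightness(albedo, normal, sun_dir):
    # Lambert's law: brightness = albedo * max(0, n . l), n and l unit vectors.
    dot = sum(n * l for n, l in zip(normal, sun_dir))
    return albedo * max(0.0, dot)

# A horizontal matte patch, with the sun 45 degrees above the horizon.
c = math.cos(math.radians(45.0))
b = predicted_brightness(0.8, (0.0, 0.0, 1.0), (0.0, c, c))
```

Comparing such a prediction against the measured image value is one route to verifying a world model, just as edge finding is.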
Basically my solution to the cart's problem is to model it to
death. Namely, enter into the computer everything that an advanced
exploratory robot might be able to report back AFTER it has gone over
an area. Then I imagine that our primitive cart robot has the duty
merely to check on the earlier exploration and verify and update it.
The highest cognitive level of the cart software is a moral
imperative to trace along a predetermined course.